Patent abstract:
The camera block (14) includes a high resolution rolling shutter camera (16) and one or more low resolution global shutter cameras (18), for example monochrome spectral cameras. All cameras are oriented in the same direction and are triggerable together to simultaneously collect a high resolution image (I0) and at least one low resolution image (I1-I4) of the same scene seen by the drone. Image processing means (22) determine the wobble type deformations present in the high resolution image and absent from the low resolution images, and combine the high resolution image (I0) and the low resolution images (I1-I4) to output a high resolution image (I0') corrected for these deformations.
Publication number: FR3038431A1
Application number: FR1556105
Filing date: 2015-06-30
Publication date: 2017-01-06
Inventor: Eng Hon Sron
Applicant: Parrot SA
IPC main classification:
Patent description:

The invention relates to the processing of digital images captured by a camera embedded in a mobile device, in particular a motorized flying machine such as a drone. The invention is advantageously applied to the images collected by the camera of a fixed-wing drone, in particular of the flying wing type such as the SenseFly eBee model, Cheseaux-Lausanne, Switzerland, which is a professional terrain mapping drone, used especially in agronomy for the monitoring of agricultural crops. The invention also applies to other types of drones, for example rotary-wing drones such as quadcopters, a typical example of which is the Bebop Drone of Parrot SA, Paris, France, which is equipped, in addition to a front camera, with a vertically-aimed camera collecting an image of the terrain overflown by the drone.
To obtain a fine cartographic representation of the terrain overflown, the camera used by the drone is a camera producing a high definition RGB color image (typically at least 12 Mpixel, up to 20 or even 40 Mpixel).
The cameras capable of meeting these specifications are cameras equipped with a rolling shutter type sensor, in which the pixels are read line by line, the exposure of the photosites and the reading of the successive lines of the sensor taking place concurrently.
There is another type of sensor, called "global shutter", which operates in two distinct stages, with an initial phase of exposure of the photosites of the sensor followed by a phase of global reading of the pixel data after the photosites have been exposed. One of the disadvantages of global shutter sensors is their relatively low definition, much lower than that of rolling shutter sensors (or otherwise at a prohibitive cost), as well as their lower sensitivity: during the entire duration of the reading phase the photosites are not exposed, unlike a rolling shutter sensor where the photosites are exposed much longer, even during the line-by-line readout.
In contrast, rolling shutter sensors have a major drawback, which is the deformations undergone by the image due to external phenomena. Indeed, with a rolling shutter sensor the scene is not globally "frozen": the lines constituting the image are not acquired at the same time for all the pixels of the image, so the various movements occurring during the capture of an image generate within it deformations that are not the same from one line to the next.
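To make this line-by-line timing concrete, the following minimal sketch (Python with numpy, using assumed sensor and motion parameters that are not taken from the patent) computes the capture time of each line of a rolling shutter sensor and the horizontal skew that a constant apparent scene motion produces between the top and bottom lines; a global shutter sensor, whose lines are all frozen at the same instant, would show no such skew.

```python
import numpy as np

# Illustrative sketch (assumed parameters, not from the patent): per-line
# capture times of a rolling shutter sensor and the resulting image skew.
ROWS = 3000                 # assumed sensor height, in lines
LINE_TIME_S = 10e-6         # assumed line readout time (10 us per line)
VX_PIX_PER_S = 2000.0       # assumed apparent horizontal scene speed, pixels/s

row_times = np.arange(ROWS) * LINE_TIME_S    # each line is captured a bit later
row_shift_px = VX_PIX_PER_S * row_times      # horizontal offset of each line

print(f"last line read {row_times[-1] * 1e3:.1f} ms after the first")
print(f"skew between top and bottom lines: {row_shift_px[-1]:.1f} px")
# A global shutter sensor would have row_times == 0 for all lines: no skew.
```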
In particular, changes in the attitude of the drone, as well as the vibrations of the engines, etc., occurring during the capture of an image generate a well known defect, called wobble, which is a blur and distortion effect of the image, where rectilinear elements of the scene are typically rendered in the image output by the sensor in the form of a wavy line.
When the wobble effect results from changes in the attitude of the drone, it is possible to effectively correct this defect by using the measurements of the inertial unit of the drone, which delivers signals representative of the instantaneous rotations of the drone, and therefore of the camera, about the three axes of pitch, roll and yaw. Since the attitude of the drone can be acquired precisely for each of the lines, in synchronism with the sensor of the camera, the application of an inverse transform of the gyrometric measurement makes it possible to compensate in real time for the deformations of the image. A technique of this type is described in particular in application FR 14 56302 of July 2, 2014 for "rotary-wing drone equipped with a video camera delivering stabilized image sequences", in the name of the applicant.
This correction technique makes it possible in particular to eliminate the wobble due to rotations, an artefact typical of rotary-wing drones such as quadcopters.
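As an illustration of this principle only (a simplified sketch under small-angle assumptions, not the method described in FR 14 56302), the per-line compensation can be sketched as follows: integrate the gyrometric rates, interpolate the accumulated rotation at each line's capture time, and convert it into the opposite pixel shift to apply to that line.

```python
import numpy as np

# Illustrative sketch of per-line gyro-based wobble compensation.
# line_times: (H,) capture time of each line, in seconds
# gyro_times: (M,) timestamps of the gyrometric samples, in seconds
# gyro_rates: (M, 2) pitch and yaw rates in rad/s
# focal_px:   focal length in pixels (small-angle approximation assumed)
def per_line_shift(line_times, gyro_times, gyro_rates, focal_px):
    dt = np.gradient(gyro_times)                           # spacing of gyro samples
    angles = np.cumsum(gyro_rates * dt[:, None], axis=0)   # integrated rotation
    pitch = np.interp(line_times, gyro_times, angles[:, 0])
    yaw = np.interp(line_times, gyro_times, angles[:, 1])
    # a rotation of the camera shifts the image by about focal_px * angle;
    # the correction is the opposite shift, applied line by line
    return -focal_px * np.column_stack([yaw, pitch])        # (H, 2) dx, dy per line
```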
In the case of fixed-wing drones, for example those of the "flying wing" type, this type of correction is not very effective, especially when the drone moves rapidly with respect to the captured scene. Indeed, the principle of line-by-line operation of the rolling shutter sensor induces complex deformations that are difficult to predict, depending on the speed and structure of the captured scene.
In fact, neither of the two types of sensor is really satisfactory: a camera equipped with a rolling shutter sensor will produce a high definition image, but subject to multiple and unpredictable deformations, while a camera equipped with a global shutter sensor will produce an image free of distortions (the scene captured being the one that was "frozen" at the end of the exposure phase, just before the beginning of the reading phase), but with a much lower resolution and also a lower sensitivity.
The aim of the invention is to propose a new type of camera block that combines the respective advantages of the two sensor types, global shutter and rolling shutter, but without their disadvantages, namely a camera block which has the following advantages: - high resolution and high sensitivity (typical advantage of a conventional rolling shutter sensor); - no distortion of the image, in particular no wobble effect (advantage of global shutter sensors); and - all made from common, widely available components at reasonable cost.
The starting point of the invention is the observation that some drones carry a camera block incorporating, in the same unit, cameras provided with both types of sensors.
This is particularly the case of drones used in professional mapping, for example for crop monitoring, hydrography, etc., which are provided with both a rolling shutter type main camera giving a high resolution RGB image of the terrain overflown, and at least one (usually several) narrow-band monochrome camera for the spectral analysis of the terrain overflown. In particular, these spectral cameras measure the reflectance of crops, i.e. the amount of light reflected by the leaves, in different bands in order to obtain information on the state of photosynthesis.
These spectral cameras are monochrome, relatively low resolution (typically of the order of 1 Mpixel) global shutter cameras operating on a narrow band of the spectrum (green, red, near infrared, etc.) and not providing an image quality exploitable as such for purposes other than spectral analysis. The basic idea of the invention is to use the image or images delivered by the spectral cameras (low definition, monochrome images, but devoid of distortions thanks to the operating principle of their global shutter sensor) to correct the distortions of the image of the rolling shutter main camera (which is a camera giving high definition RGB images, but subject to the distortions explained above).
More specifically, the invention proposes a camera block comprising, in a manner known per se, a rolling shutter type camera comprising a digital sensor at a first resolution, a high resolution, and at least one global shutter type camera comprising a digital sensor at a second resolution, a low resolution, less than said first resolution. The rolling shutter camera and the at least one global shutter camera have their optical axes oriented in the same direction and are triggerable together so as to simultaneously collect a high resolution image and at least one low resolution image of the same scene seen by the drone.
In a manner characteristic of the invention, the camera block also comprises image processing means, able to: determine wobble-type deformations present in the high-resolution image and absent from the at least one low-resolution image; and combine the high resolution image and the at least one low resolution image to output a high resolution image corrected for said deformations.
In a first embodiment, the image processing means comprise means capable of: finding points of interest in the high resolution image and in the at least one low resolution image; respectively mapping the points of interest of the high resolution image to those of the at least one low resolution image; calculating the respective displacements of the points of interest of the high resolution image with respect to the corresponding points of interest of the at least one low resolution image; determining a transformation defined by all of said displacements; and applying to the high resolution image an inverse transformation of said transformation.
In a second embodiment, the image processing means comprise means capable of: constructing a representation of the scene from the at least one low resolution image; determining the movements undergone by the camera block during the collection of the high resolution image, from signals delivered by gyrometric, accelerometric and/or geolocation sensors of the drone; and projecting point by point, taking into account said movements, each pixel of the high resolution image as a texture onto said representation of the scene constructed from the at least one low resolution image.
According to various advantageous subsidiary characteristics: the rolling shutter camera is an RGB camera; the at least one global shutter camera is a monochrome camera; the at least one global shutter camera is a set of four narrow-band spectral cameras; the resolution of the sensor of the rolling shutter camera is at least 12 Mpixel; the resolution of the sensor of the at least one global shutter camera is at least 1 Mpixel.
The invention also relates to an image processing method, comprising in a manner known per se: the acquisition of an image at a first resolution, a high resolution, delivered by a rolling shutter type camera; and the acquisition of at least one image at a second resolution, a low resolution, less than said first resolution, delivered by at least one respective global shutter type camera, the rolling shutter camera and the at least one global shutter camera having their optical axes oriented in the same direction and being triggerable together so as to simultaneously collect said high resolution image and said at least one low resolution image of the same scene seen by the drone.
Typically, this method further comprises: - the determination of wobble type deformations present in the high resolution image and absent from the at least one low resolution image; - the combination of the high resolution image and the at least one low resolution image to output a high resolution image corrected for said deformations.
In a first embodiment, the method comprises the steps of: - searching for points of interest in the high-resolution image and in the at least one low-resolution image; - respectively mapping the points of interest of the high resolution image to those of the at least one low resolution image; - calculating the respective displacements of the points of interest of the high-resolution image with respect to the corresponding points of interest of the at least one low-resolution image; - determining a transformation defined by all of said displacements; and - applying to the high resolution image an inverse transformation of said transformation.
In a second embodiment, the method comprises the steps of: - constructing a representation of the scene from the at least one low resolution image; - determining the movements undergone by the camera block during the collection of the high resolution image, based on signals delivered by gyrometric, accelerometric and/or geolocation sensors of the drone; and - projecting point by point, taking account of said movements, each pixel of the high resolution image onto said representation of the scene constructed from the at least one low resolution image.
An embodiment of the present invention will now be described with reference to the appended drawings in which the same references designate identical or functionally similar elements from one figure to another.
Figure 1 is an overview showing a drone flying over a terrain whose scene is captured by an onboard camera.
Figure 2 shows the general structure of a camera block comprising a high definition rolling shutter camera and four global shutter spectral cameras.
FIG. 3 illustrates, in the form of a block diagram, the various elements making it possible, from the plurality of images generated by the camera block of FIG. 2, to obtain a high resolution image free from deformations.
Figure 4 is a flowchart illustrating the various steps of a first embodiment of the invention.
Figure 5 is a diagram illustrating the different steps of a second embodiment of the invention.
Examples of embodiments and implementations of the invention will now be described.
In Figure 1 there is illustrated a drone 10, for example a fixed-wing drone of the "flying wing" type such as the SenseFly eBee, flying over a terrain 12 of which the drone will perform the mapping. For this, the drone is provided with a vertically-aimed camera unit 14, turned towards the terrain so as to capture the image of a scene formed by an approximately rectangular portion of the terrain.
FIG. 2 shows more precisely the different elements of the camera unit 14 which, in this example, comprises a set of five cameras 16, 18 whose optical axes are all oriented in the same direction, namely the direction of the vertical, the five cameras each delivering an image of approximately the same scene 12 overflown by the drone.
The camera unit 14 comprises a high-definition HD camera, referenced 16, of the rolling shutter type, for example with a resolution of 12 Mpixel or more (20 to 40 Mpixel) delivering an RGB image in all the colors of the visible spectrum.
Block 14 also includes four spectral cameras SP1 ... SP4, referenced 18, which are monochrome, low resolution (typically 1 Mpixel) cameras of global shutter type. These four cameras 18 are identical, except for the band of the spectrum to which they are sensitive. In an application to agriculture, these bands are for example narrow bands located in: green, red, extreme red (red edge) and near infrared.
As illustrated in FIG. 3, the five cameras 16 and 18 are triggered simultaneously by a common signal 20, and respectively deliver: - for the rolling shutter camera 16: an image signal I0, which is a high-resolution RGB signal, and - for the four global shutter spectral cameras 18: respective image signals I1-I4, which are monochrome, low resolution signals.
These signals, I0 on the one hand and I1-I4 on the other hand, are combined in a unit 22 allowing, in the manner that will be explained below, the distortions present in the image I0 (distortions inherent in the use of a rolling shutter camera) to be eliminated by means of the images I1-I4 produced by the spectral cameras 18 (images which are not distorted, because of the use of a global shutter sensor, but which are monochrome and low resolution). The unit 22 can also receive gyrometric information and/or geolocation (GPS) information produced by the inertial measurement unit (IMU) of the drone or by onboard sensors (shown schematically by the block 24).
The processing performed by the block 22 can be real-time, on-the-fly processing by a microcomputer or by a dedicated graphics processing unit (GPU).
It can also be executed in post-processing, the drone recording the images I0 and I1-I4 separately so that the distortion-free image I0' can be obtained later.
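By way of illustration only, the skeleton below (Python; all names, types and structure are assumptions, not taken from the patent) shows the role played by the combination unit 22 in such a pipeline: it takes the distorted image I0, the undistorted low resolution images I1-I4 and, optionally, the IMU/GPS data of block 24, and returns the corrected image I0'. Either of the two embodiments described next can provide the body of the correction function.

```python
from dataclasses import dataclass
from typing import Optional, Sequence
import numpy as np

# Sketch of the role of block 22 (names and structure are assumptions, not
# the patent's): combine I0 (HD, rolling shutter, distorted) with I1..I4
# (low resolution, global shutter, undistorted), optionally helped by the
# IMU/GPS data of block 24, to produce the corrected image I0'.
@dataclass
class CaptureSet:
    i0: np.ndarray                    # high resolution RGB image
    spectral: Sequence[np.ndarray]    # the four monochrome spectral images
    imu_gps: Optional[dict] = None    # optional gyrometric / geolocation data

def correct_wobble(capture: CaptureSet) -> np.ndarray:
    """Return I0', the high resolution image corrected for wobble deformations.

    The body is provided by either embodiment: point-of-interest matching
    (first embodiment) or 3D reconstruction and reprojection (second one).
    """
    raise NotImplementedError
```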
First mode of implementation of the invention
Figure 4 illustrates, in the form of a flowchart 100, the various steps of a first embodiment of the invention.
The first step (block 102) consists of acquiring the image data I0 and I1-I4 captured simultaneously by the high definition rolling shutter camera 16 and the four global shutter cameras 18.
On each of the images, the method searches (block 104) for points of interest (POI), for example by means of a detector algorithm of the FAST (Features from Accelerated Segment Test) type.
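A minimal sketch of this detection step, using the FAST detector available in OpenCV (one possible implementation choice, not the patent's code; the image variables i0 and i1 to i4 are assumed to be already loaded):

```python
import cv2

# Detect FAST points of interest (block 104) in the HD image I0 and in each
# low resolution spectral image I1..I4 (the latter are already monochrome).
fast = cv2.FastFeatureDetector_create(threshold=25, nonmaxSuppression=True)

i0_gray = cv2.cvtColor(i0, cv2.COLOR_BGR2GRAY)
kp_i0 = fast.detect(i0_gray, None)
kp_spectral = [fast.detect(img, None) for img in (i1, i2, i3, i4)]

print(f"{len(kp_i0)} points of interest found in I0")
```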
Since the high-definition camera 16 and the four spectral cameras 18 have their optical axes oriented in the same direction (substantially the direction of the vertical), it will be considered that the images given by the four spectral cameras 18, which are the undistorted images of the scene, are perfectly aligned, thus providing four position estimates per point of interest, which improves the final result.
It should also be noted that a temporal integration, in several images, makes it possible to substantially improve the correction, by reducing the effects of fugitive artefacts appearing only on an isolated image.
Once the points of interest have been acquired, the method looks for matches (step 106) between the POIs of the image I0, on the one hand, and those of the images I1 to I4, on the other hand, so as to determine those POIs which are present both in the image I0 and in one or more of the images I1 to I4. When two corresponding POIs are found in I0 and I1-I4, they are stored, together with their position in each image. The next step (block 108) consists in calculating, from the data obtained, the displacement of each of the points of interest between the image I1-I4 (undeformed image) and the corresponding image I0 (deformed image). The set of displacements calculated in this way makes it possible to determine (block 110) a transform representative of the deformation undergone by I0 with respect to the undeformed images I1-I4.
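Continuing the previous sketch, steps 106 to 110 can be illustrated as follows (one possible concrete choice, not the patent's code: ORB descriptors are used here to match the FAST points, and a single homography stands in for the transform defined by the set of displacements; the coordinate rescaling assumes the cameras share roughly the same field of view):

```python
import cv2
import numpy as np

# Match the points of interest of I0 with those of one low-res image I_k
# (block 106), compute their displacements (block 108) and fit a global
# transformation representative of the deformation of I0 (block 110).
orb = cv2.ORB_create()
kp0, des0 = orb.compute(i0_gray, kp_i0)            # describe I0's POIs
kpk, desk = orb.compute(i1, kp_spectral[0])        # describe I1's POIs

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des0, desk)

scale = i0_gray.shape[1] / i1.shape[1]             # low-res -> HD pixel units
pts_i0 = np.float32([kp0[m.queryIdx].pt for m in matches])
pts_ik = np.float32([kpk[m.trainIdx].pt for m in matches]) * scale

displacements = pts_i0 - pts_ik                    # per-POI deformation
H, inliers = cv2.findHomography(pts_ik, pts_i0, cv2.RANSAC, 5.0)
```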
It is then possible (step 112) to apply to I0 an inverse transformation of the transform determined in the previous step, so as to straighten the deformed high resolution image I0 into an undeformed high resolution image I0'.
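With the homography H of the previous sketch (which maps undeformed coordinates to the coordinates of the deformed image I0), step 112 amounts to resampling I0 through H, i.e. applying the inverse of the estimated deformation:

```python
# Apply the inverse transformation (block 112): sample I0 through H so that
# each output pixel lands at its undeformed position, yielding I0'.
h_px, w_px = i0.shape[:2]
i0_corrected = cv2.warpPerspective(
    i0, H, (w_px, h_px), flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
```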
It should be noted that the analysis of the high resolution image, after correction of the distortions, makes it possible, in a subsidiary manner, to determine the depth of each point of interest by analysis of the successive images, that is to say its component in the direction of the optical axis between the camera and the point of interest on the terrain (the optical axis being the Z axis in a reference frame X, Y, Z linked to the camera, X and Y being the coordinates along the left/right and up/down axes of the image).
Second mode of implementation of the invention
Figure 5 illustrates in the form of a diagram 200 the different steps of a second embodiment of the invention.
As in the previous case, the image data I0 and I1-I4 respectively delivered by the high definition camera 16 and by the four spectral cameras 18 are acquired simultaneously (blocks 202 and 204). The next step (block 206) consists in performing a reconstruction of the three-dimensional scene of the terrain overflown from the (low definition) images I1-I4 delivered by the spectral cameras. This step can be implemented with software such as Pix4Dmapper from Pix4D, Lausanne, Switzerland.
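The reconstruction itself is performed by photogrammetry software such as the one cited above; purely as a stand-in illustration (an assumption, not that software's API), a sparse version of block 206 can be sketched by triangulating matched points between two successive low resolution global shutter views whose poses are known:

```python
import cv2
import numpy as np

# Sparse stand-in for block 206: triangulate terrain points from two low
# resolution global shutter views with known intrinsics and poses.
def sparse_terrain_points(K, pose_a, pose_b, pts_a, pts_b):
    """K: 3x3 intrinsics; pose_*: 3x4 [R|t] matrices; pts_*: (N, 2) matched pixels."""
    P_a, P_b = K @ pose_a, K @ pose_b
    pts4d = cv2.triangulatePoints(P_a, P_b,
                                  pts_a.T.astype(np.float64),
                                  pts_b.T.astype(np.float64))
    return (pts4d[:3] / pts4d[3]).T          # (N, 3) points on the terrain
```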
This step makes it possible to establish a representation of the terrain topology, together with the position of the cameras with respect to it, for each shot.
Moreover, from a precise time stamp (block 208) of the images taken by the rolling shutter high definition camera 16 and from the information delivered by the gyrometric and/or geolocation sensors carried by the drone (block 210), it is possible to determine (block 212) the movements of the drone, and therefore of the cameras, during the duration of the shot, and thus to know the orientation of the camera for each pixel of the image.
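A minimal sketch of blocks 208 to 212 (assumed data layout, small attitude changes interpolated componentwise; not the patent's code): recover the camera attitude at the capture time of every line of the rolling shutter image, from which the orientation associated with each pixel follows.

```python
import numpy as np

# Interpolate the drone/camera attitude at the capture time of each line of
# the rolling shutter image, from the frame timestamp and the IMU samples.
def per_line_attitude(frame_start_s, line_time_s, n_lines, imu_t, imu_angles):
    """imu_t: (M,) timestamps in s; imu_angles: (M, 3) roll/pitch/yaw in rad."""
    line_t = frame_start_s + np.arange(n_lines) * line_time_s
    return np.column_stack([np.interp(line_t, imu_t, imu_angles[:, k])
                            for k in range(3)])    # (n_lines, 3) attitude per line
```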
We then perform (block 214) a point-by-point projection of each pixel of the image I0 (high definition, deformed), as a texture, onto the 3D scene (low resolution, undistorted) reconstructed at the previous step (206). The application of a texture (here the deformed image obtained by the rolling shutter camera 16) onto a 3D scene is a step known in itself, used in most 3D software, in particular the one cited above.
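As an illustration under strong simplifying assumptions (pinhole camera, terrain approximated by the ground plane z = 0 instead of the full reconstructed 3D scene), the geometric core of block 214 is to cast a ray through each pixel of I0 using the attitude of that pixel's line and intersect it with the terrain, where the pixel's colour is deposited:

```python
import numpy as np

# Project one pixel (u, v) of I0 onto the terrain: use the attitude of its
# line (R_line) and the camera position to cast a ray and intersect it with
# the ground plane z = 0 (the real method intersects the reconstructed scene).
def project_pixel(u, v, K_inv, R_line, cam_pos):
    ray = R_line @ (K_inv @ np.array([u, v, 1.0]))   # viewing ray, world frame
    s = -cam_pos[2] / ray[2]                         # camera assumed above ground
    return cam_pos + s * ray                         # 3D point receiving the texel
```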
The result of this projection is the image I0', which is a high resolution RGB image corrected for the distortions presented by the image I0.
Claims (11)
1. A camera unit (14) adapted to be carried on board a drone (10), comprising: a rolling shutter type camera (16) comprising a digital sensor at a first resolution, a high resolution; and at least one global shutter type camera (18) comprising a digital sensor at a second resolution, a low resolution, less than said first resolution, the rolling shutter camera and the at least one global shutter camera having their optical axes oriented in the same direction and being triggerable together so as to simultaneously collect a high resolution image (I0) and at least one low resolution image (I1-I4) of the same scene (12) seen by the drone, characterized in that it also comprises: image processing means (22), capable of: determining wobble type deformations present in the high resolution image and absent from the at least one low resolution image; and combining the high resolution image (I0) and the at least one low resolution image (I1-I4) to output a high resolution image (I0') corrected for said deformations.
2. The camera block of claim 1, wherein the image processing means (22) comprise means capable of: finding (104) points of interest in the high resolution image (I0) and in the at least one low resolution image (I1-I4); respectively mapping (106) the points of interest of the high resolution image to those of the at least one low resolution image; calculating (108) the respective displacements of the points of interest of the high resolution image with respect to the corresponding points of interest of the at least one low resolution image; determining (110) a transformation defined by all of said displacements; and applying (112) to the high resolution image (I0) an inverse transformation of said transformation.
3. The camera block of claim 1, wherein the image processing means (22) comprise means capable of: constructing (206) a representation of the scene from the at least one low resolution image (I1-I4); determining (212) the movements undergone by the camera block during the collection of the high resolution image (I0), from signals delivered by gyrometric, accelerometric and/or geolocation sensors of the drone; and projecting point by point (214), taking into account said movements, each pixel of the high resolution image (I0) as a texture onto said representation of the scene constructed from the at least one low resolution image (I1-I4).
4. The camera block of claim 1, wherein the rolling shutter camera (16) is an RGB camera.
5. The camera block of claim 1, wherein the at least one global shutter camera (18) is a monochrome camera.
6. The camera block of claim 5, wherein the at least one global shutter camera (18) is a set of four narrow-band spectral cameras.
7. The camera block of claim 1, wherein the resolution of the sensor of the rolling shutter camera (16) is at least 12 Mpixel.
8. The camera block of claim 1, wherein the resolution of the sensor of the at least one global shutter camera (18) is at least 1 Mpixel.
9. An image processing method, comprising: - the acquisition (102; 204) of an image (I0) at a first resolution, a high resolution, delivered by a rolling shutter type camera (16); - the acquisition (102; 202) of at least one image (I1-I4) at a second resolution, a low resolution, less than said first resolution, delivered by at least one respective global shutter type camera (18), the rolling shutter camera and the at least one global shutter camera having their optical axes oriented in the same direction and being triggerable together so as to simultaneously collect said high resolution image (I0) and said at least one low resolution image (I1-I4) of the same scene (12) seen by the drone, characterized in that it further comprises: - the determination of wobble type deformations present in the high resolution image and absent from the at least one low resolution image; - the combination of the high resolution image (I0) and the at least one low resolution image (I1-I4) to output a high resolution image corrected for said deformations.
10. The method of claim 9, comprising the steps of: - finding (104) points of interest in the high resolution image (I0) and in the at least one low resolution image (I1-I4); - respectively mapping (106) the points of interest of the high resolution image to those of the at least one low resolution image; - calculating (108) the respective displacements of the points of interest of the high-resolution image with respect to the corresponding points of interest of the at least one low-resolution image; - determining (110) a transformation defined by all of said displacements; and - applying (112) to the high resolution image (I0) an inverse transformation of said transformation.
11. The method of claim 9, comprising the steps of: - constructing (206) a representation of the scene from the at least one low resolution image (I1-I4); - determining (212) the movements undergone by the camera block during the collection of the high resolution image (I0), from signals delivered by gyrometric, accelerometric and/or geolocation sensors of the drone; and - projecting point by point (214), taking account of said movements, each pixel of the high resolution image (I0) onto said representation of the scene constructed from the at least one low resolution image (I1-I4).
Family patents (publication number, publication date):
CN106385538A, 2017-02-08
US20170006240A1, 2017-01-05
EP3113103A1, 2017-01-04
JP2017017696A, 2017-01-19
US10122949B2, 2018-11-06
FR3038431B1, 2017-07-21
Documents cited (publication number, filing date / publication date, applicant, title):
US7812869B2, 2007-05-11 / 2010-10-12, Aptina Imaging Corporation: Configurable pixel array system and method
US8179446B2, 2010-01-18 / 2012-05-15, Texas Instruments Incorporated: Video stabilization and reduction of rolling shutter distortion
US8692198B2, 2010-04-21 / 2014-04-08, Sionyx, Inc.: Photosensitive imaging devices and associated methods
CN101963751B, 2010-08-19 / 2011-11-30, 西北工业大学: Device and method for acquiring high-resolution full-scene image in high dynamic range in real time
US8686943B1, 2011-05-13 / 2014-04-01, Imimtek, Inc.: Two-dimensional method and system enabling three-dimensional user interaction with a device
US8723789B1, 2011-02-11 / 2014-05-13, Imimtek, Inc.: Two-dimensional method and system enabling three-dimensional user interaction with a device
US9347792B2, 2011-10-31 / 2016-05-24, Honeywell International Inc.: Systems and methods for displaying images with multi-resolution integration
FR3020169A1, 2014-04-16 / 2015-10-23, Parrot: Rotating wing drone with video camera delivering stabilized image sequences

Cited by (publication number, filing date / publication date, applicant, title):
FR3052556B1, 2016-06-13 / 2018-07-06, Parrot Drones: Imaging assembly for drone and system comprising such an assembly mounted on a flying drone
CN110249424A, 2017-02-02 / 2019-09-17, 株式会社钟化: Interlayer thermal bonding component, interlayer thermal bonding method, manufacturing method of interlayer thermal bonding component
CN110463187B, 2017-03-28 / 2021-09-21, 富士胶片株式会社: Image pickup apparatus, image pickup method, and storage medium
KR102330264B1, 2017-08-04 / 2021-11-23, 삼성전자주식회사: Electronic device for playing movie based on movement information and operating method thereof
US10491778B2, 2017-09-21 / 2019-11-26, Honeywell International Inc.: Applying features of low-resolution data to corresponding high-resolution data
CN107948540B, 2017-12-28 / 2020-08-25, 信利光电股份有限公司: Road monitoring camera and method for shooting road monitoring image
GB2585416B, 2017-12-28 / 2022-03-16, Landmark Graphics Corp: Gridding global data into a minimally distorted global raster
KR101987439B1, 2018-02-08 / 2019-06-10, 이병섭: Unmanned air vehicle preventing an after image screen of ground picture
US10778916B2, 2018-10-24 / 2020-09-15, Honeywell International Inc.: Applying an annotation to an image based on keypoints
CN109600556B, 2019-02-18 / 2020-11-06, 武汉大学: High-quality precise panoramic imaging system and method based on single lens reflex
CN112204946A, 2019-10-28 / 2021-01-08, 深圳市大疆创新科技有限公司: Data processing method, device, movable platform and computer readable storage medium
Legal events:
2016-06-23, PLFP, Fee payment (year of fee payment: 2)
2017-01-06, PLSC, Search report ready (effective date: 2017-01-06)
2017-06-16, PLFP, Fee payment (year of fee payment: 3)
2017-07-21, TP, Transmission of property (owner name: PARROT DRONES, FR; effective date: 2017-06-16)
2018-06-15, PLFP, Fee payment (year of fee payment: 4)
2020-03-13, ST, Notification of lapse (effective date: 2020-02-06)
Priority and related applications (priority date 2015-06-30):
FR1556105A, filed 2015-06-30 (FR3038431B1): High resolution camera block for drone, with correction of ondulating oscillation type instabilities
US15/188,441, filed 2016-06-21 (US10122949B2): High-resolution camera unit for a drone, with correction of the wobble-type distortions
EP16175818.0A, filed 2016-06-22 (EP3113103A1): High-resolution camera unit for drone, with correction of wobble distortion
JP2016126137A, filed 2016-06-27 (JP2017017696A): High resolution camera for unmanned aircraft involving correction of wobble type distortion
CN201610832646.5A, filed 2016-06-29 (CN106385538A): High-resolution camera unit for drone, with correction of wobble-type distortions